Note: This page's design, presentation and content have been created and enhanced using Claude (Anthropic's AI assistant) to improve visual quality and educational experience.
Week 4 • Sub-Lesson 1

🧭 Ethical Frameworks & Four Lenses

Why existing research ethics fall short, and four philosophical traditions for navigating AI dilemmas

What We'll Cover

This session comes with a major caveat: I am not an ethicist, so in future versions of this course this section will be developed in collaboration with people for whom this is their area of expertise. Much of what follows is a very simplified view, but hopefully one that makes you think. Please keep this caveat in mind throughout.

AI introduces ethical questions that traditional research ethics frameworks were never designed to answer. Institutional Review Boards, the Belmont Report, and informed consent protocols emerged in a world where humans were the sole producers of research outputs. They remain necessary, but they are not sufficient for a world where AI can co-author papers, generate synthetic data, code analyses, and write literature reviews.

This session introduces four complementary philosophical traditions (consequentialism, deontology, virtue ethics, and ubuntu) as lenses for reasoning through the ethical dilemmas that AI creates in research. The goal is not to prescribe rules, but to develop your capacity for ethical reasoning. Different lenses will often produce different answers. That is the point.

Note also that these are not the only ethical frameworks, and, again, what follows is an extremely simplified view of each of them.

🔍 The Ethics Gap

Existing research ethics frameworks address important concerns, but they leave significant gaps when it comes to AI-assisted research.

What Existing Frameworks Cover

Traditional research ethics, developed largely in response to historical abuses, focus on:

  • Human participant protections: Informed consent, minimising harm, equitable selection of participants (the Belmont Report)
  • Research integrity: Honest reporting, proper attribution, avoiding fabrication and falsification
  • Data protection: Privacy, confidentiality, secure storage
  • Institutional oversight: IRB/ethics committee review processes

These are necessary. But they were designed for a world where every word in a paper, every line of analysis, and every research decision was made by a human being.

What AI Introduces

AI-assisted research raises questions that existing frameworks do not adequately address:

  • Non-human agency: Who is responsible when AI generates content that contains errors, biases, or fabrications?
  • Training data provenance: Was the data used to train the model collected with consent? Does it represent the communities being studied?
  • Opacity: When AI contributes to analysis or writing, the reasoning process is not transparent, even to the researcher
  • Authorship: Can a non-human entity be credited as a contributor to research?
  • Data privacy: What happens to sensitive research data entered into cloud-based AI tools?
  • Bias amplification: AI can systematically reproduce and scale biases in ways that individual researchers cannot

⚠️ A Rapidly Evolving Landscape

The ethical norms around AI in research are changing faster than guidelines can be written. Journal policies, institutional guidelines, and funder requirements that exist today may be outdated within months. Throughout this week, I encourage you to check the most recent versions of any policies we reference, and to treat any framework (including the ones we present here) as a tool for reasoning, not as a definitive answer.

📄 Foundational Mapping

Jobin, A., Ienca, M., & Vayena, E. (2019): "The Global Landscape of AI Ethics Guidelines" – A comprehensive analysis of 84 AI ethics documents from around the world, identifying convergence around principles like transparency, justice, and non-maleficence, but also significant divergence in how these principles are interpreted and operationalised. A useful starting point for understanding the breadth of the AI ethics conversation. Note that it was published in 2019, so in many ways it is already very outdated.

🔎 Four Philosophical Lenses

Rather than searching for the single "correct" ethical framework, we introduce four complementary traditions. Each illuminates different aspects of AI ethics dilemmas. Think of them as tools in a toolkit, each useful for different tasks.

1. Consequentialism

Focus: Outcomes and consequences

An action is ethical if it produces the best overall consequences. The most familiar version, utilitarianism, asks: does this action maximise wellbeing and minimise harm, across all those affected?

Key question for AI in research: What are the likely outcomes of using AI in this way, and do the benefits outweigh the harms for you, your participants, your field, and broader society?

  • Strengths: Pragmatic, outcome-focused, good at weighing trade-offs
  • Limitations: Consequences are hard to predict; whose consequences count? Can justify problematic means if ends are good enough

2. Deontology

Focus: Duties, rules, and rights

Some actions are inherently right or wrong, regardless of their consequences. Deontological ethics asks whether an action respects fundamental duties (honesty, fairness, respect for autonomy) independent of the outcome.

Key question for AI in research: Are there rules or duties that this AI use violates, regardless of how well it works? Does it respect the rights and autonomy of all involved?

  • Strengths: Clear boundaries, protects individual rights, not swayed by "the ends justify the means"
  • Limitations: Rules can conflict with each other; can be rigid in genuinely novel situations where existing rules do not apply

3. Virtue Ethics

Focus: Character and intellectual virtues

Rather than asking "what should I do?", virtue ethics asks "what kind of person am I becoming?" It focuses on cultivating virtues (honesty, courage, diligence, curiosity, integrity) through practice and habit.

Key question for AI in research: Does this AI use reflect and develop the intellectual virtues that good research requires? Am I becoming a better or worse researcher through my use of these tools?

  • Strengths: Emphasises personal responsibility, adaptable to new situations, attentive to character development
  • Limitations: Can be subjective; what counts as a "virtue" varies across cultures; less helpful for institutional policy

4. Ubuntu / Relational Ethics

Focus: Relationships and community

"Umuntu ngumuntu ngabantu" โ€” a person is a person through other people. Ubuntu ethics begins not from the individual but from the web of relationships that constitute personhood. Ethical action is action that strengthens the community and its bonds.

Key question for AI in research: How does this AI use affect my relationships with my research community, my participants, my students, and broader society? Does it strengthen or weaken these bonds?

  • Strengths: Attentive to collective harm, power dynamics, and the social fabric; challenges individualist assumptions
  • Limitations: Can be vague on individual decisions; risk of being invoked superficially without engaging the philosophical tradition

We will explore ubuntu and African relational ethics in much greater depth in Sub-Lesson 2.

💡 Why Four Lenses, Not One?

These four frameworks will often produce different answers to the same ethical question. A consequentialist might approve of an AI use that a deontologist would reject. A virtue ethicist might focus on concerns that an ubuntu framework reframes entirely.

That disagreement is not a problem to be solved; it is the essence of ethical reasoning. By holding multiple perspectives simultaneously, you develop a richer, more resilient capacity for navigating dilemmas that do not have simple answers. The goal is not to find the "right" lens, but to use all four to see what each reveals, and what each misses.
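If you find it useful to keep reflective prompts alongside your analysis scripts, the short Python sketch below simply collects the four key questions from this section into a checklist you can step through against a concrete scenario. It is purely illustrative (not part of any course tooling, and every name in it is hypothetical), offered only as one way to make the "toolkit" metaphor tangible.

    # Purely illustrative sketch: all names here are hypothetical, not course tooling.
    # It collects the four "key questions" from this section so a researcher can
    # step through them before deciding how (or whether) to use an AI tool.

    LENS_QUESTIONS = {
        "Consequentialism": "What are the likely outcomes of using AI in this way, "
                            "and do the benefits outweigh the harms for you, your "
                            "participants, your field, and broader society?",
        "Deontology": "Are there rules or duties this AI use violates, regardless of "
                      "how well it works? Does it respect the rights and autonomy of "
                      "all involved?",
        "Virtue ethics": "Does this AI use reflect and develop the intellectual "
                         "virtues good research requires? Am I becoming a better or "
                         "worse researcher through my use of these tools?",
        "Ubuntu / relational ethics": "How does this AI use affect my relationships "
                                      "with my research community, participants, "
                                      "students, and broader society?",
    }

    def reflect(scenario: str) -> None:
        """Print the scenario alongside each lens's key question."""
        print(f"Scenario: {scenario}\n")
        for lens, question in LENS_QUESTIONS.items():
            print(f"[{lens}] {question}\n")

    reflect("Using ChatGPT to rewrite a thesis discussion section without disclosure")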

🌍 Beyond Philosophical Lenses: Governance Frameworks

These four traditions are lenses for reasoning, but scholars and institutions are also building concrete governance frameworks that translate ethical values into structured tools for assessing AI systems. One particularly important example is the Research ICT Africa (RIA) Just AI Framework of Inquiry (Chetty & Sey, 2025), which organises nine interconnected inquiries, from human rights and data justice to sustainability and economic justice, into an actionable framework for AI research and policy, developed explicitly from and for African contexts. We will examine the RIA framework in detail in Sub-Lesson 2, alongside the deeper exploration of ubuntu.

🧪 Applying the Lenses: A Quick Example

To see how the four lenses work in practice, consider a straightforward scenario.

📋 Scenario: The Rewritten Discussion

A PhD student uses ChatGPT to rewrite the discussion section of their thesis chapter. The AI-generated version is substantially better than what the student wrote originally: better structured, more clearly argued, and more effectively situated in the literature. The student submits the rewritten version without disclosing AI use. Their supervisor does not have an explicit policy on AI tools.

Consequentialist Lens

What are the outcomes? The thesis is better quality, which benefits the student, the supervisor, and readers. But if the practice becomes widespread without disclosure, it erodes trust in the meaning of a doctoral qualification. If discovered, the consequences for the student could be severe. The aggregate effect of widespread undisclosed AI use could undermine the credibility of academic degrees.

Verdict: Short-term benefits are real, but long-term systemic consequences are concerning.

Deontological Lens

Is there a duty being violated? Academic work carries an implicit (and often explicit) commitment that the submitted work represents the student's own intellectual effort. Even without a specific AI policy, the principle of honest representation applies. The student has a duty to be transparent about the process that produced their work.

Verdict: The lack of disclosure violates a duty of honesty, regardless of the quality improvement.

Virtue Ethics Lens

What kind of researcher is the student becoming? A doctoral thesis is not just a document; it is a process of developing the capacity for independent scholarly thought. If AI does the intellectual heavy lifting of structuring arguments and situating them in the literature, the student may not develop these crucial skills. The question is not just about this chapter, but about intellectual formation.

Verdict: Even if the output is better, the process may undermine the student's development as a scholar.

Ubuntu Lens

How does this affect relationships? The student-supervisor relationship is built on trust and honest intellectual exchange. Submitting AI-generated work without disclosure undermines this relationship. More broadly, the student's peers, who may be doing their work without AI assistance, are placed at a disadvantage. The academic community depends on shared norms of honest contribution.

Verdict: The undisclosed use weakens the relational bonds that sustain academic community.

💡 Notice the Convergence and Divergence

In this case, all four lenses point in a similar direction, toward disclosure and caution. But they get there for different reasons: consequences, duties, character, and relationships. In more complex cases (which we will explore in Sub-Lesson 4), the lenses will diverge more sharply, and the reasoning becomes genuinely difficult. That is when having multiple frameworks is most valuable.

📚 Readings for This Week

Three core readings and six supplementary readings for the full week. All are freely accessible.

📄 Core Reading 1

Birhane, A. (2021): "Algorithmic Injustice: A Relational Ethics Approach" – Patterns, 2(2). Open access. An ubuntu-informed approach to AI ethics that argues algorithmic injustice is structural, not just a matter of individual system bias. Essential reading for Sub-Lesson 2.

📄 Core Reading 2

Nature Editorial (2023): "Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules" – Nature, 613. Free to read. Short and essential: Nature's influential statement on AI disclosure, which set the tone for many subsequent journal policies. Primary reading for Sub-Lesson 3.

📄 Core Reading 3

Mhlambi, S. (2020): "From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for AI Governance" – Carr Center Discussion Paper, Harvard Kennedy School. Free PDF. The foundational paper for understanding ubuntu as a framework for AI ethics. Essential reading for Sub-Lesson 2.

📄 Supplementary Readings

Lund, B.D., et al. (2023): "ChatGPT and a New Academic Reality" – JASIST. Overview of the academic integrity debate around AI. Preprint freely available.

van Dis, E.A.M., et al. (2023): "ChatGPT: Five Priorities for Research" – Nature, 614. Short, actionable, and widely cited. Free to read.

Okolo, C.T. (2023): "AI in the Global South: Opportunities and Challenges Towards More Equitable Governance" – Brookings Institution. The equity dimensions of global AI access.

Jobin, A., Ienca, M., & Vayena, E. (2019): "The Global Landscape of AI Ethics Guidelines" – Nature Machine Intelligence, 1(9). Comprehensive mapping of AI ethics frameworks worldwide.

ACM (2024): "Policy on Authorship and Use of Generative AI and Large Language Models" – A discipline-specific policy example worth examining closely.

Mollick, E. (2023): "On AI and the Ethics of Disclosure" – One Useful Thing (Substack). A practical, thoughtful perspective on when and how to disclose AI use in academic and professional contexts.

📚 Summary & Key Takeaways

This session established the foundation for ethical reasoning about AI in research.

  • The ethics gap is real: Existing research ethics frameworks (IRBs, Belmont Report, informed consent) are necessary but insufficient for AI-assisted research
  • Four complementary lenses: Consequentialism (outcomes), deontology (duties), virtue ethics (character), and ubuntu (relationships) each illuminate different dimensions of AI ethics dilemmas
  • Disagreement is productive: The four lenses will often produce different answers; that divergence is not a problem but the basis for deeper ethical reasoning
  • Tools, not rules: These frameworks are not checklists to apply mechanically. They are tools for developing your capacity for ethical reasoning in a rapidly changing landscape
  • The landscape is vast: This week necessarily focuses on selected dimensions of AI ethics. Labour exploitation, surveillance and carceral AI, autonomous weapons, corporate power concentration, deepfakes and democratic erosion, gender, disability, and more are all important areas we cannot cover in depth. See The Broader Landscape of AI Ethics for an orientation to these topics and pointers for further exploration

Next session: We take a deep dive into ubuntu and African relational ethics, including the Research ICT Africa Just AI Framework, exploring what these traditions offer that Western individualist frameworks miss, and why they matter for AI governance globally.